On the penalty factor for autoregressive order selection in finite samples
Authors
Abstract
[Fig. 1. Absolute value of the crosstalk versus the number of samples (NS) used to estimate the cross-cumulants. Each point is the average of 10 experiments.]

Each point in Fig. 1 corresponds to the average over 10 experiments, in which the mixing matrix is randomly chosen: the matrix entries m_ij (i ≠ j) are random numbers in the range [-1, +1]. With 500 samples, a residual crosstalk of about -20 dB is obtained. In the case of nonstationary signals, cross-cumulant estimation must be done on few samples and therefore has a larger variance; consequently, it can lead to a less accurate estimate of the mixing matrix. We still obtained interesting performance: a residual crosstalk of about -15 to -20 dB with various signals (colored noise, speech) and statistics estimated over 500 samples.

In this correspondence, we proved that the mixing matrix can be estimated using fourth-order cross-cumulants for two mixtures of two non-Gaussian sources. Solutions are obtained by rooting a fourth-order polynomial equation. Using second-order cross-cumulants allows us to simplify the method; the solution is then obtained by rooting two second-order polynomial equations, and gives the result if one source is Gaussian. The methods are quite simple, but their roots are very sensitive to the accuracy of the estimated cumulants. In fact, this direct solution is less accurate than indirect methods, especially adaptive algorithms. Moreover, we restricted the study to the separation of two sources, and theoretical solutions for three sources or more seem not easily tractable. However, in the case of two mixtures of two sources, it may give a good starting point, at a small computational cost, for any adaptive algorithm.
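To make the quantities above concrete, the sketch below (Python/NumPy, illustrative assumptions only) shows how sample fourth-order cross-cumulants can be estimated from NS observations of a 2x2 mixture whose off-diagonal entries are drawn in [-1, +1], and how a residual-crosstalk value in dB can be read off a global matrix G = W·A. It is not the paper's closed-form polynomial solution, and the crosstalk metric used here is an assumed, common definition.

```python
import numpy as np

rng = np.random.default_rng(0)

def cum4(a, b, c, d):
    """Sample fourth-order cross-cumulant of zero-mean sequences a, b, c, d."""
    return (np.mean(a * b * c * d)
            - np.mean(a * b) * np.mean(c * d)
            - np.mean(a * c) * np.mean(b * d)
            - np.mean(a * d) * np.mean(b * c))

def crosstalk_db(G):
    """Residual crosstalk (dB) of a 2x2 global matrix G = W @ A (assumed metric)."""
    g = np.abs(G)
    return 20 * np.log10(max(g[0, 1] / g[0, 0], g[1, 0] / g[1, 1]))

NS = 500                                    # number of samples, as in Fig. 1
s = rng.uniform(-1, 1, size=(2, NS))        # two independent non-Gaussian sources
A = np.array([[1.0, rng.uniform(-1, 1)],    # mixing matrix, off-diagonal entries
              [rng.uniform(-1, 1), 1.0]])   # drawn in [-1, +1]
x = A @ s                                   # observed mixtures
x -= x.mean(axis=1, keepdims=True)

# Fourth-order cross-cumulants of the mixtures (inputs to the method)
c1112 = cum4(x[0], x[0], x[0], x[1])
c1122 = cum4(x[0], x[0], x[1], x[1])
c1222 = cum4(x[0], x[1], x[1], x[1])
print("estimated cross-cumulants:", c1112, c1122, c1222)

# Any estimated unmixing matrix W is scored by the crosstalk of W @ A;
# here W is the true inverse plus a small perturbation, for illustration.
W = np.linalg.inv(A) + 0.01 * rng.standard_normal((2, 2))
print("residual crosstalk (dB):", crosstalk_db(W @ A))
```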
Similar resources
Order selection for vector autoregressive models
Order-selection criteria for vector autoregressive (AR) modeling are discussed. The performance of an order-selection criterion is optimal if the model of the selected order is the most accurate model in the considered set of estimated models: here vector AR models. Suboptimal performance can be a result of underfit or overfit. The Akaike information criterion (AIC) is an asymptotically unbiase...
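As a rough, hedged illustration of the setting described in this entry (not the paper's analysis): the sketch below fits vector AR models of increasing order by least squares and picks the order minimizing the standard multivariate AIC, log det(Σ̂) + 2pd²/N. The simulated VAR(1) example and all names are assumptions.

```python
import numpy as np

def fit_var_ls(y, p):
    """Least-squares fit of a VAR(p); returns the residual covariance estimate."""
    N, d = y.shape
    Y = y[p:]                                                 # targets
    X = np.hstack([y[p - k:N - k] for k in range(1, p + 1)])  # lagged regressors
    X = np.hstack([np.ones((N - p, 1)), X])                   # intercept
    B, *_ = np.linalg.lstsq(X, Y, rcond=None)
    E = Y - X @ B
    return E.T @ E / (N - p)

def aic_var(y, p):
    N, d = y.shape
    sigma = fit_var_ls(y, p)
    return np.log(np.linalg.det(sigma)) + 2.0 * p * d * d / (N - p)

rng = np.random.default_rng(1)
d, N = 2, 400
A1 = np.array([[0.5, 0.1], [0.0, 0.4]])   # true VAR(1) coefficient matrix
y = np.zeros((N, d))
for t in range(1, N):
    y[t] = A1 @ y[t - 1] + rng.standard_normal(d)

aics = {p: aic_var(y, p) for p in range(1, 9)}
print("order selected by AIC:", min(aics, key=aics.get))
```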
Vector Autoregressive Model Selection: Gross Domestic Product and Europe Oil Prices Data Modelling
We consider the problem of model selection in vector autoregressive models with normal innovations. Tests such as Vuong's and Cox's tests are provided for order and model selection, i.e. for selecting the order and a suitable subset of regressors in a vector autoregressive model. We propose a modified log-likelihood ratio test for selecting subsets of regressors. The Europe oil prices, ...
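For concreteness, here is a minimal sketch of the classical Vuong statistic for comparing two non-nested AR subset specifications under Gaussian innovations; it illustrates the general idea only and is not the authors' modified log-likelihood ratio test. The lag sets and simulated series are assumptions.

```python
import numpy as np
from scipy import stats

def ar_subset_loglik(y, lags):
    """Per-observation Gaussian log-likelihoods of an AR model on the given lags."""
    p = max(lags)
    Y = y[p:]
    X = np.column_stack([np.ones_like(Y)] + [y[p - k:len(y) - k] for k in lags])
    beta, *_ = np.linalg.lstsq(X, Y, rcond=None)
    e = Y - X @ beta
    s2 = np.mean(e ** 2)
    return -0.5 * np.log(2 * np.pi * s2) - e ** 2 / (2 * s2)

rng = np.random.default_rng(2)
N = 500
y = np.zeros(N)
for t in range(2, N):                          # simulate an AR(2) series
    y[t] = 0.5 * y[t - 1] - 0.3 * y[t - 2] + rng.standard_normal()

l1 = ar_subset_loglik(y, lags=[1, 2])          # candidate subset 1
l2 = ar_subset_loglik(y, lags=[1, 3])          # candidate subset 2
n = min(len(l1), len(l2))
m = l1[-n:] - l2[-n:]                          # pointwise log-likelihood differences
V = np.sqrt(n) * m.mean() / m.std(ddof=1)      # Vuong statistic, approx. N(0, 1)
print("Vuong statistic:", V, "two-sided p-value:", 2 * stats.norm.sf(abs(V)))
```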
Consistent estimation and order selection for non-stationary autoregressive processes with stable innovations
A possibly non-stationary autoregressive process, of unknown finite order, with possibly infinite-variance innovations is studied. The Ordinary Least Squares autoregressive parameter estimates are shown to be consistent, and their rate of convergence, which depends on the index of stability, α, is established. We also establish consistency of lag-order selection criteria in the non-stationary c...
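A small simulation sketch of this setting is shown below (assuming scipy.stats.levy_stable is available for the infinite-variance innovations): an AR(2) driven by alpha-stable noise with α = 1.5 is generated and its coefficients are estimated by Ordinary Least Squares. The parameter values are illustrative, not taken from the paper.

```python
import numpy as np
from scipy.stats import levy_stable

N, p = 2000, 2
phi1, phi2 = 0.5, -0.3                      # true AR(2) coefficients (stationary)
eps = levy_stable.rvs(alpha=1.5, beta=0.0, size=N, random_state=3)  # infinite variance

y = np.zeros(N)
for t in range(p, N):
    y[t] = phi1 * y[t - 1] + phi2 * y[t - 2] + eps[t]    # heavy-tailed innovations

Y = y[p:]
X = np.column_stack([y[p - 1:N - 1], y[p - 2:N - 2]])    # lag-1 and lag-2 regressors
phi_hat, *_ = np.linalg.lstsq(X, Y, rcond=None)
print("OLS estimates:", phi_hat, "true:", (phi1, phi2))
```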
Finite sample criteria for autoregressive order selection
In finite-sample practice, the quality of selected AR models depends on the true process, on the number of observations, on the estimation algorithm, and on the order selection criterion. Samples are considered to be finite if the maximum candidate model order for selection is greater than N/10, where N denotes the number of observations. Finite sample formulae give empirical approximations for ...
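To make the selection step concrete, the hedged sketch below fits AR(p) models by least squares on a short record and selects an order with AIC and with the small-sample corrected AICc. The paper's own finite-sample formulae involve empirical variance coefficients tied to the estimation method; AICc is used here only as a familiar stand-in for a finite-sample correction, and all simulation settings are assumptions.

```python
import numpy as np

def ar_resvar(y, p):
    """Residual variance of a least-squares AR(p) fit (p = 0 -> sample variance)."""
    if p == 0:
        return np.var(y)
    Y = y[p:]
    X = np.column_stack([y[p - k:len(y) - k] for k in range(1, p + 1)])
    beta, *_ = np.linalg.lstsq(X, Y, rcond=None)
    return np.mean((Y - X @ beta) ** 2)

def aic(y, p):
    return len(y) * np.log(ar_resvar(y, p)) + 2 * (p + 1)

def aicc(y, p):
    N, k = len(y), p + 1                       # k parameters including the variance
    return aic(y, p) + 2 * k * (k + 1) / (N - k - 1)

rng = np.random.default_rng(4)
N = 60                                         # short record
y = np.zeros(N)
for t in range(2, N):
    y[t] = 0.6 * y[t - 1] - 0.2 * y[t - 2] + rng.standard_normal()

pmax = 15                                      # pmax > N/10: the finite-sample regime above
cands = range(pmax + 1)
print("AIC  order:", min(cands, key=lambda p: aic(y, p)))
print("AICc order:", min(cands, key=lambda p: aicc(y, p)))
```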
A Modified Grey Wolf Optimizer by Individual Best Memory and Penalty Factor for Sonar and Radar Dataset Classification
Meta-heuristic Algorithms (MA) have been widely accepted in recent decades as excellent ways to solve a variety of optimization problems. Grey Wolf Optimization (GWO) is a novel Meta-heuristic Algorithm (MA) that has generated a great deal of research interest due to its advantages, such as simple implementation and powerful exploitation. This study proposes a novel GWO-based MA and two extra fea...
Autoregressive Lag-Order Selection Using Conditional Saddlepoint Approximations
A new method for determining the lag order of the autoregressive polynomial in regression models with autocorrelated normal disturbances is proposed. It is based on a sequential testing procedure using conditional saddlepoint approximations and permits the desire for parsimony to be explicitly incorporated, unlike penalty-based model selection methods. Extensive simulation results indicate that...
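The sequential idea can be sketched generically as follows (Python/NumPy/SciPy, illustrative only): starting from a maximum lag, the highest-lag AR coefficient is tested with an asymptotic t-statistic and the order is reduced until a test rejects. The paper's contribution is to replace this asymptotic approximation with conditional saddlepoint approximations, which is not reproduced here.

```python
import numpy as np
from scipy import stats

def highest_lag_tstat(y, p):
    """OLS t-statistic of the lag-p coefficient in an AR(p) fit."""
    Y = y[p:]
    X = np.column_stack([y[p - k:len(y) - k] for k in range(1, p + 1)])
    beta, *_ = np.linalg.lstsq(X, Y, rcond=None)
    e = Y - X @ beta
    s2 = e @ e / (len(Y) - p)
    cov = s2 * np.linalg.inv(X.T @ X)
    return beta[-1] / np.sqrt(cov[-1, -1])

def sequential_order(y, pmax, level=0.05):
    """Top-down sequential testing: keep the first lag whose coefficient is significant."""
    crit = stats.norm.ppf(1 - level / 2)
    for p in range(pmax, 0, -1):
        if abs(highest_lag_tstat(y, p)) > crit:
            return p
    return 0

rng = np.random.default_rng(5)
N = 300
y = np.zeros(N)
for t in range(2, N):
    y[t] = 0.6 * y[t - 1] - 0.25 * y[t - 2] + rng.standard_normal()

print("selected lag order:", sequential_order(y, pmax=8))
```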
Journal: IEEE Trans. Signal Processing
Volume: 44, Issue: -
Pages: -
Year of publication: 1996